
    How does our motor system determine its learning rate?

    Motor learning is driven by movement errors. The speed of learning can be quantified by the learning rate: the proportion of an error that is corrected for in the planning of the next movement. Previous studies have shown that the learning rate depends on the reliability of the error signal and on the uncertainty of the motor system’s own state. These dependencies agree with the predictions of the Kalman filter, a state estimator that can determine the optimal learning rate for each movement such that the expected movement error is minimized. Here we test not only whether the average behaviour is optimal, as previous studies showed, but whether the learning rate is chosen optimally for every individual movement. Subjects made repeated movements to visual targets with their unseen hand and received visual feedback about their endpoint error immediately after each movement. The reliability of these error signals was varied across three conditions. The results are inconsistent with the predictions of the Kalman filter: correction for large errors at the beginning of a series of movements to a fixed target was slower than predicted, and the learning rates for movement extent and direction did not differ in the way the Kalman filter predicts. Instead, a simpler model that uses the same learning rate for all movements with the same error-signal reliability can explain the data. We conclude that the brain does not apply state estimation to determine the optimal planning correction for every individual movement; it employs the simpler strategy of a fixed learning rate for all movements with the same level of error-signal reliability.
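    The Kalman-filter account can be made concrete with a minimal scalar sketch (the noise variances below are illustrative, not values fitted to the study). The optimal learning rate on each trial is the Kalman gain, which starts high while the state estimate is uncertain and settles to a steady value; the simpler model the authors favour instead uses one fixed rate per reliability condition.

```python
import numpy as np

def kalman_gains(n_trials, q=0.01, r=1.0, p0=1.0):
    """Per-trial Kalman gains (optimal learning rates) for a scalar state.
    q: process-noise variance, r: error-signal variance, p0: initial
    uncertainty. All values are illustrative, not estimates from the study."""
    p, gains = p0, []
    for _ in range(n_trials):
        k = p / (p + r)          # optimal proportion of the error to correct
        gains.append(k)
        p = (1.0 - k) * p + q    # posterior uncertainty plus process noise
    return gains

gains = kalman_gains(50)
```

    With these values the gain starts at 0.5 and decays towards a steady state near 0.1, so the Kalman filter predicts unusually fast corrections early in a series, which is exactly where the observed behaviour fell short of the prediction.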

    Spatially valid proprioceptive cues improve the detection of a visual stimulus

    Vision and proprioception are the main sensory modalities that convey hand location and direction of movement. Fusion of these sensory signals into a single robust percept is now well documented. However, it is not known whether these modalities also interact in the spatial allocation of attention, as has been demonstrated for other modality pairings. The aim of this study was to test whether proprioceptive signals can spatially cue a visual target to improve its detection. Participants used a planar manipulandum in a forward reaching action and judged during this movement whether a near-threshold visual target appeared at either of two lateral positions. The target presentation was followed by a masking stimulus, which made its possible location unambiguous, but not its presence. Proprioceptive cues were given by applying a brief lateral force to the participant’s arm, either in the same direction as the on-screen location of the mask (validly cued) or in the opposite direction (invalidly cued). The d′ detection rate of the target was higher when the direction of the proprioceptive stimulus was compatible with the location of the visual target than when it was incompatible. These results suggest that proprioception influences the allocation of attention in visual space.
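    The sensitivity measure used here, d′, separates detectability from response bias. A minimal sketch (the hit and false-alarm rates below are made up, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates: a valid proprioceptive cue raises the hit rate at an
# unchanged false-alarm rate, so d' increases.
d_valid = d_prime(0.80, 0.20)
d_invalid = d_prime(0.60, 0.20)
```

    Because both rates enter the computation, a cue that merely made observers more liberal (raising hits and false alarms together) would leave d′ unchanged; the reported d′ advantage therefore reflects genuine sensitivity, not bias.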

    A neural surveyor to map touch on the body

    Perhaps the most recognizable sensory map in all of neuroscience is the somatosensory homunculus. Although it seems straightforward, this simple representation belies the complex link between an activation in a somatotopic map and the associated touch location on the body. Any isolated activation is spatially ambiguous without a neural decoder that can read its position within the entire map, but how this is computed by neural networks is unknown. We propose that the somatosensory system implements multilateration, a common computation used by surveying and global positioning systems to localize objects. Specifically, to decode touch location on the body, multilateration estimates the relative distance between the afferent input and the boundaries of a body part (e.g., the joints of a limb). We show that a simple feedforward neural network, which captures several fundamental receptive field properties of cortical somatosensory neurons, can implement Bayes-optimal multilateration. Simulations demonstrated that this decoder produces a pattern of localization variability between two boundaries that is unique to multilateration. Finally, we identified this computational signature in actual psychophysical experiments, suggesting that multilateration is a candidate computational mechanism underlying tactile localization.
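    The variability signature can be illustrated with a toy one-dimensional version of the decoder: fuse two distance estimates, one from each boundary, with noise that grows linearly with distance (the slope is an assumption of this sketch, not a value from the paper). Variance-weighted fusion then predicts localization variability that is smallest near the boundaries and peaks mid-limb:

```python
import numpy as np

def multilateration_sd(x, limb_length=1.0, noise_slope=0.1):
    """SD of a variance-weighted fusion of two distance-to-boundary
    estimates, with noise growing linearly with distance (assumed slope)."""
    sd_near = noise_slope * x                    # distance from one joint
    sd_far = noise_slope * (limb_length - x)     # distance from the other
    fused_var = 1.0 / (1.0 / sd_near**2 + 1.0 / sd_far**2)
    return np.sqrt(fused_var)

positions = np.linspace(0.05, 0.95, 19)
sds = multilateration_sd(positions)
```

    The resulting inverted-U profile, anchored at the joints, is the pattern the authors identify in their psychophysical data.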

    What autocorrelation tells us about motor variability: Insights from dart throwing

    In sports such as golf and darts it is important to be able to propel an object towards a goal location with as little variability as possible. A factor that influences this variability is the extent to which motor planning is updated from movement to movement based on observed errors. Previous work has shown that for reaching movements, our motor system uses the learning rate (the proportion of an error that is corrected for in the planning of the next movement) that is optimal for minimizing endpoint variability. Here we examined whether the learning rate is hard-wired, and therefore automatically optimal, or whether it is optimized through experience. We compared the performance of experienced dart players and beginners in a dart task. A hallmark of the optimal learning rate is that the lag-1 autocorrelation of movement endpoints is zero. We found that the lag-1 autocorrelation of experienced dart players was near zero, implying a near-optimal learning rate, whereas it was negative for beginners, suggesting a larger-than-optimal learning rate. We conclude that learning rates for trial-by-trial motor learning are optimized through experience. This study also highlights the usefulness of the lag-1 autocorrelation as an index of performance in studying motor-skill learning.
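    The lag-1 diagnostic can be reproduced with a small simulation of a standard trial-by-trial aiming model (our sketch, not the authors' code): the endpoint is the aim plus execution noise, and the next aim corrects a proportion b of the observed endpoint error while picking up planning noise. In this model, with equal planning and execution noise, the rate that zeroes the lag-1 autocorrelation works out to (sqrt(5) - 1)/2, about 0.618; over-correcting makes successive endpoints negatively correlated.

```python
import numpy as np

def endpoint_lag1(b, n=200_000, sd_plan=1.0, sd_exec=1.0, seed=0):
    """Lag-1 autocorrelation of endpoints in a trial-by-trial aiming model:
    endpoint = aim + execution noise; next aim = aim - b*endpoint + planning
    noise. A sketch of a standard model, not the authors' implementation."""
    rng = np.random.default_rng(seed)
    exec_noise = rng.normal(0.0, sd_exec, n)
    plan_noise = rng.normal(0.0, sd_plan, n)
    y, aim = np.empty(n), 0.0
    for i in range(n):
        y[i] = aim + exec_noise[i]
        aim = aim - b * y[i] + plan_noise[i]
    return float(np.corrcoef(y[:-1], y[1:])[0, 1])

rho_near_optimal = endpoint_lag1(0.618)  # experienced players: near zero
rho_overcorrect = endpoint_lag1(1.0)     # over-correction: negative
```

    Correcting the full error on every trial (b = 1) re-injects each trial's execution noise into the next plan, which is what drives the negative autocorrelation seen in beginners.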

    How many motoric body representations can we grasp?

    At present there is a debate about the number of body representations in the brain. The most commonly used dichotomy contrasts the body image, thought to underlie perception and proven to be susceptible to bodily illusions, with the body schema, hypothesized to guide actions and so far proven to be robust against bodily illusions. In this rubber hand illusion study we investigated the susceptibility of the body schema by manipulating the amount of stimulation on the rubber hand and the participant’s hand, adjusting the postural configuration of the hand, and investigating a grasping rather than a pointing response. The results showed, for the first time, grasping responses that were altered by the grip aperture of the rubber hand. This illusion-sensitive motor response challenges one of the foundations on which the dichotomy is based, and highlights the importance of illusion induction versus type of response when investigating body representations.

    The weight of representing the body: addressing the potentially indefinite number of body representations in healthy individuals

    There is little consensus about the characteristics and number of body representations in the brain. In the present paper, we examine the main problems that are encountered when trying to dissociate multiple body representations in healthy individuals with the use of bodily illusions. Traditionally, task-dependent bodily illusion effects have been taken as evidence for dissociable underlying body representations. Although this reasoning holds well when the dissociation is made between different types of tasks that are closely linked to different body representations, it becomes problematic when the dissociation is found within the same response task (i.e., within the same type of representation). Hence, this experimental approach to investigating body representations runs the risk of identifying as many different body representations as there are significantly different experimental outputs. Here, we discuss and illustrate a different approach to this pluralism by shifting the focus towards the task-dependency of illusion outputs in combination with the type of multisensory input. Finally, we present two examples of behavioural bodily illusion experiments and apply Bayesian model selection to illustrate how this approach to dissociating and classifying multiple body representations can be applied.
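    The model-selection logic can be sketched with the Bayesian Information Criterion, one common approximation to Bayesian model comparison (the data and candidate models below are invented for illustration). If two tasks draw on the same underlying representation, a model with one shared mean should beat a model that posits a separate mean per task, because the extra parameter buys no extra fit:

```python
import numpy as np

def gauss_nll(x, mu, sd):
    """Negative log-likelihood of data x under a Gaussian(mu, sd)."""
    x = np.asarray(x)
    return float(np.sum(0.5 * np.log(2 * np.pi * sd**2)
                        + (x - mu)**2 / (2 * sd**2)))

def bic(nll, n_params, n_obs):
    """Bayesian Information Criterion; lower is better."""
    return 2.0 * nll + n_params * np.log(n_obs)

# Invented illusion measures from two tasks with identical means
task_a = np.array([1.8, 2.1, 2.0, 2.3, 1.9, 2.2, 2.0, 1.7])
task_b = task_a[::-1]
data = np.concatenate([task_a, task_b])
sd = data.std()

# One shared representation (mean + sd) vs one mean per task (2 means + sd)
bic_shared = bic(gauss_nll(data, data.mean(), sd), 2, data.size)
bic_separate = bic(gauss_nll(task_a, task_a.mean(), sd)
                   + gauss_nll(task_b, task_b.mean(), sd), 3, data.size)
```

    Since both models fit equally well here, the separate-representations model only pays the complexity penalty and loses; with genuinely different task means the comparison would flip. This is the kind of principled arbitration between "one representation" and "many" that the authors advocate over counting significant effects.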

    Integration of length and curvature in haptic perception

    We investigated whether and how length and curvature information are integrated when an object is explored with one hand. Subjects explored four types of objects between thumb and index finger. Objects differed in length only, in curvature only, or in both length and curvature, with the two cues either correlated as in a circle or anti-correlated. We found that when both length and curvature are present, performance is significantly better than when only one of the two cues is available. Therefore, we conclude that length and curvature are integrated. Moreover, if the two cues are correlated as in a circular cross-section rather than anti-correlated, performance is better than predicted by a combination of two independent cues. We conclude that integration of curvature and length is highly efficient when the cues are combined as in a circle, which is the most common combination of curvature and length in daily life.
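    The benchmark for "better than a single cue" is reliability-weighted (maximum-likelihood) fusion of independent cues. A sketch with illustrative single-cue SDs, not the measured thresholds:

```python
def fused_sd(sd1, sd2):
    """SD predicted by reliability-weighted fusion of two independent cues:
    1/var_fused = 1/var_1 + 1/var_2."""
    return (sd1**-2 + sd2**-2) ** -0.5

# Hypothetical single-cue discrimination SDs for length and curvature
sd_length, sd_curvature = 2.0, 3.0
sd_fused = fused_sd(sd_length, sd_curvature)
```

    The fused SD always falls below that of the better single cue; the study's circle condition beat even this independent-cue prediction, which is why the authors describe that combination as highly efficient.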

    Neuromotor Noise, Error Tolerance and Velocity-Dependent Costs in Skilled Performance

    In motor tasks with redundancy, neuromotor noise can lead to variations in execution while achieving relative invariance in the result. The present study examined whether humans find solutions that are tolerant to this intrinsic noise. We used a throwing task in a virtual set-up where an infinite set of angle and velocity combinations at ball release yields an accurate throw. Based on a mathematical model of the task, expected results were computed to provide quantitative predictions about solution strategies that are tolerant to noise (Hypothesis 1). As such strategies can take on a large range of velocities, a second hypothesis was that subjects select strategies that minimize velocity at release to avoid costs associated with signal- or velocity-dependent noise or higher energy demands (Hypothesis 2). Two experiments with different target constellations tested these two hypotheses. Results of Experiment 1 showed that subjects chose solutions with high error tolerance, although these solutions also had relatively low velocity. These two benefits seemed to outweigh the fact that, for many subjects, these solutions were close to a high-penalty area, i.e. they were risky. Experiment 2 dissociated the two hypotheses. Results showed that individuals behaved consistently with Hypothesis 1, although their solutions were distributed over a range of velocities. Additional analyses revealed that a velocity-dependent increase in variability was absent, probably because the solution manifold channelled variability in a task-specific manner. Hence, the general acceptance of signal-dependent noise may need some qualification. These findings have significance for the fundamental understanding of how the central nervous system deals with its inherent neuromotor noise.
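    Error tolerance can be made concrete with a toy projectile stand-in for the paper's virtual throwing model (the noise magnitudes are assumed for this sketch). Two release strategies hit the same target exactly, but the same execution noise hurts one far more than the other:

```python
import numpy as np

def landing_error(angle, speed, target=5.0, g=9.81):
    """Absolute range error of a simple projectile released at ground level."""
    return np.abs(speed**2 * np.sin(2.0 * angle) / g - target)

def expected_error(angle, speed, sd_angle=0.03, sd_speed=0.15,
                   n=20_000, seed=0):
    """Mean landing error under Gaussian execution noise on angle and speed."""
    rng = np.random.default_rng(seed)
    noisy_a = rng.normal(angle, sd_angle, n)
    noisy_v = rng.normal(speed, sd_speed, n)
    return float(landing_error(noisy_a, noisy_v).mean())

# Both release strategies are exact solutions, but 45 degrees sits where
# range is insensitive to angle (d(range)/d(angle) = 0), i.e. on a flat,
# error-tolerant part of the solution manifold; 15 degrees does not.
err_tolerant = expected_error(np.pi / 4, 7.004)
err_fragile = expected_error(np.pi / 12, 9.905)
```

    Choosing where to sit on the solution manifold, rather than just reducing noise, is the noise-tolerance strategy the study's subjects exhibited.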

    Sources of variability in interceptive movements

    In order to successfully intercept a moving target one must be at the right place at the right time. But simply being there is seldom enough: one usually needs to make contact in a certain manner, for instance to hit the target in a certain direction. How this is best achieved depends on the exact task, but to get an idea of what factors may limit performance we asked people to hit a moving virtual disk through a virtual goal, and analysed the spatial and temporal variability in the way in which they did so. We estimated that for our task the standard deviations of timing and spatial accuracy are about 20 ms and 5 mm. Additional variability arises from individual movements being planned slightly differently and being adjusted during execution. We argue that the way our subjects moved was precisely tailored to the task demands, and that movement accuracy is not only limited by the muscles and their activation, but also, and probably even mainly, by the resolution of visual perception.
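    One standard way to separate spatial from temporal variability (our illustrative sketch, not necessarily the authors' analysis) exploits the fact that a timing error of SD sigma_t displaces the hit by the target's speed times sigma_t, so the miss variance grows linearly with squared speed: var(miss) = sigma_space^2 + v^2 * sigma_time^2. Regressing miss variance on squared target speed recovers both components; the numbers below are simulated from the magnitudes reported in the abstract (5 mm, 20 ms), with assumed target speeds:

```python
import numpy as np

rng = np.random.default_rng(0)
sd_space, sd_time = 5.0, 0.020             # mm and s, as in the abstract
speeds = np.array([100.0, 300.0, 500.0])   # assumed target speeds, mm/s
n = 50_000

variances = []
for v in speeds:
    miss = rng.normal(0.0, sd_space, n) + v * rng.normal(0.0, sd_time, n)
    variances.append(miss.var())

# Intercept of variance vs squared speed = spatial variance; slope = temporal
slope, intercept = np.polyfit(speeds**2, variances, 1)
est_sd_time = np.sqrt(slope)
est_sd_space = np.sqrt(intercept)
```

    The faster the target moves, the more the timing component dominates the miss distribution, which is why estimates of this kind need data across a range of target speeds.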

    How the required precision influences the way we intercept a moving object

    Do people perform a given motor task differently when it is easy than when it is difficult? To find out, we asked subjects to intercept moving virtual targets by tapping on them with their fingers, and examined how their behaviour depended on the required precision. Everything about the task was the same on all trials except the extent to which the fingertip and target had to overlap for the target to be considered hit. The target disappeared with a sound if it was hit and deflected away from the fingertip if it was missed. In separate sessions, the required precision was varied from quite lenient to very demanding. Requiring higher precision obviously decreased the number of targets that were hit, but it did not reduce the variability in where the subjects tapped with respect to the target. Requiring higher precision did reduce the systematic deviations from the target centre and the lag-1 autocorrelation of such deviations, presumably because subjects received information about smaller deviations from the target centre. We found no evidence for lasting effects of training at a certain required precision. All the results can be reproduced with a model in which the precision of individual movements is independent of the required precision, and in which feedback associated with missing the target is used to reduce systematic errors. We conclude that people do not approach this motor task differently when it is easy than when it is difficult. © 2013 Springer-Verlag Berlin Heidelberg